3 Differential Equations
3.1 Basic Notions
A differential equation is any sort of equation involving a function (or functions) and their derivatives. For example,
\[f'(x) = f(x)\]
or
\[\sin(x^2)y + y''x - 3x^2 = \cos(x)\]
or
\[\begin{align*} X' &= 2XY - X^2 Y\\ Y' &= Y^5 + XY^2 \end{align*}\]
The order of a differential equation is the order of the highest derivative that appears. The first and third examples are 1st order; the second is 2nd order. We won’t consider differential equations involving partial derivatives.
One way to think about a differential equation is that it is a rule constraining the growth (velocity, acceleration, and higher derivatives) of a function in terms of its own values. For example, the first equation describes exponential growth (or decay) where a function increases in proportion to its own size.
Usually one function is considered unknown, and we’re interested in understanding the behavior of the function in terms of the differential equation it satisfies. Many functions can satisfy a given differential equation, but in typical circumstances a single function can be picked out by specifying “initial conditions”, values of the unknown function (and some of its derivatives) at a particular point. For example, \[f'(x) = f(x)\] with the initial condition \(f(0) = 1\) uniquely determines the exponential function \(f(x) = e^x\).
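This can be seen numerically. Here is a minimal sketch of our own (Euler's method, which the text develops later): marching \(f' = f\), \(f(0) = 1\) forward in small steps recovers \(e^x\) without ever writing down a formula.

```python
# A numerical sketch (illustration, not from the text): Euler's method
# applied to f'(x) = f(x) with f(0) = 1.  The IVP alone pins the
# function down, and stepping it forward approximates e^x.
import math

def euler_exponential(x_end, h=1e-4):
    """Approximate f(x_end) for f' = f, f(0) = 1, with step size h."""
    x, f = 0.0, 1.0
    while x < x_end:
        f += h * f      # Euler step: f(x + h) ≈ f(x) + h * f'(x)
        x += h
    return f

print(euler_exponential(1.0), "vs", math.exp(1))
```

Shrinking the step size `h` brings the approximation closer to \(e\).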
In the case of a system of equations, like the third example, we can think of it as a vector-valued function, so an initial condition would specify both \(X\) and \(Y\), such as \(X(0) = 2\) and \(Y(0) = 5\).
Definition 3.1 An order \(n\) differential equation in an unknown function \(x\) of a variable \(t\) can be written as \[F(x,x',...,x^{(n)},t) = 0.\]
When \(n\) initial conditions \(x(t_0) = x_0, x'(t_0) = x'_0, \ldots, x^{(n-1)}(t_0) = x^{(n-1)}_0\) are specified, this is called an initial value problem.
You’ve seen many of these before: integrating a function \(f(x)\) by finding an antiderivative means looking for a function \(F(x)\) which satisfies the equation \[F'(x) = f(x).\] The bounds of the integral turn into initial conditions: an integral from \(a\) to \(x\) is the same as imposing the initial condition \(F(a) = 0\).
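In fact, for this IVP an Euler step of size \(h\) is exactly one left Riemann rectangle, so solving the IVP numerically *is* numerical integration. A minimal sketch of our own illustrating this:

```python
# Sketch (our own illustration): for F'(x) = f(x) with F(a) = 0, each
# Euler step h * f(x) is one left Riemann rectangle, so marching the IVP
# forward computes the definite integral of f from a to b.
import math

def antiderivative_ivp(f, a, b, n=100_000):
    """Approximate F(b), where F' = f and F(a) = 0."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        total += h * f(a + i * h)   # Euler step == left Riemann sum term
    return total

print(antiderivative_ivp(math.cos, 0.0, math.pi / 2))  # ≈ sin(pi/2) = 1
```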
3.2 Existence-Uniqueness
Not all initial value problems have solutions. From calculus, we know that the initial value problem associated to integrating a continuous function on a closed interval has a solution (unique, even). More generally
Theorem 3.1 Consider a first-order differential equation \[x' = f(x,t)\] with an initial condition \(x(t_0) = x_0\). Consider the rectangle \([t_0 - A, t_0 + A] \times [ x_0 - B, x_0 + B ]\) centered at the point \((t_0, x_0)\). Suppose that \(f\) is continuous on this rectangle and bounded there by \(M\).
Then this initial value problem has a solution for all \(t\) such that \[|t - t_0 | \leq \min \{A,B/M\}\] (on this interval the graph of the solution stays inside the original rectangle).
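To see the bound in action, here is a worked example of our own choosing. Consider \[x' = x^2 + t^2, \qquad x(0) = 0,\] on the rectangle \([-1,1] \times [-1,1]\), so \(A = B = 1\). On this rectangle \(|x^2 + t^2| \leq 2\), so we may take \(M = 2\), and the theorem guarantees a solution at least for \[|t| \leq \min\{1, 1/2\} = 1/2.\]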
Most IVPs have solutions, but even relatively simple ones need not have a unique solution.
Example 3.1 Consider the differential equation
\[f'(x) = \frac 1 x.\]
Let \(x_0\) be some nonzero number. Every initial value problem \(f(x_0) = y_0\) has infinitely many solutions.
Indeed, suppose \(x_0 > 0\). Then one solution is \[f(x) = \ln|x| - \ln(x_0) + y_0,\] but one can add any constant to the piece of \(f\) to the left of zero and obtain another solution. Symmetrically, if \(x_0 < 0\), the constant on the piece to the right of zero is free.
(Calculus II courses sometimes gloss over this: the general antiderivative of \(\frac 1 x\) is not simply \(\ln|x| + C\); the constant can be chosen independently on each half of the \(x\)-axis.)
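A numerical spot-check of our own makes this concrete: two functions that differ by a constant only on the negative half-axis both satisfy \(f'(x) = 1/x\).

```python
# Spot-check (our own illustration): two *different* functions both solve
# f'(x) = 1/x.  They agree for x > 0 but differ by a constant for x < 0.
import math

def f1(x):
    return math.log(abs(x))

def f2(x):
    # same as f1 on the right half-axis, shifted by +5 on the left one
    return math.log(abs(x)) + (5.0 if x < 0 else 0.0)

h = 1e-6
for x in (-2.0, -0.5, 0.5, 2.0):
    d1 = (f1(x + h) - f1(x - h)) / (2 * h)   # central difference ≈ f1'(x)
    d2 = (f2(x + h) - f2(x - h)) / (2 * h)
    print(f"x={x}: f1'≈{d1:.5f}, f2'≈{d2:.5f}, 1/x={1 / x:.5f}")
```

Both numerical derivatives match \(1/x\) at every sample point, on both halves of the axis.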
In the example, the discontinuity of \(1/x\) at zero plays a major role in allowing the differential equation to have infinitely many solutions.
Theorem 3.2 Continue from Theorem 3.1 and further assume that \(\partial f/\partial x\) is continuous. Then the solution is unique.
A “nice” initial value problem has a unique solution. Most of the IVPs we encounter from applied questions are nice or nearly so – they describe physical phenomena, which we generally expect to unfold in the same manner when set up identically.
The proofs of these results are beyond the scope of our course, but to give you a sense of the ideas at play, we will prove the uniqueness result in the special case of integration.
Theorem 3.3 Consider the initial value problem \[y' = f(t)\]
If \(f\) is continuous on \([a,b]\), then any initial value problem \(y(a) = A\) has a unique solution on \([a,b]\).
Proof. Existence follows from the fact that continuous functions are integrable on closed intervals.
Suppose that \(y_1\) and \(y_2\) are two solutions to this IVP, meaning that both
\[\begin{align*} y_1' &= f(t)&y_1(a) = A\\ y_2' &= f(t)&y_2(a) = A \end{align*}\]
We would like to show that \(y_1 = y_2\), or equivalently that \(y_1-y_2 = 0\). Subtracting the IVPs for \(y_1,y_2\), we see that \(z=y_1-y_2\) satisfies the IVP
\[z' = 0\ \ \ \ z(a) = 0.\]
If \(z\) is not identically zero, then we can find some \(c \in (a,b]\) such that \(z(c) \neq 0\). Apply the MVT to \(z\) on \([a,c]\): there is some \(\zeta \in (a,c)\) such that \[z'(\zeta) = \frac{z(c) - z(a)}{c-a} = \frac{z(c)}{c-a},\]
using \(z(a) = 0\). The RHS is nonzero by assumption, yet \(z'(\zeta) = 0\) because of the IVP \(z\) satisfies. This is a contradiction, so there is no such \(c\).
One can adapt this to show, more generally, that the only solutions to \(y' = 0\) on a closed interval are the constant functions. This fact is the source of the ubiquitous “\(+C\)” in Calculus II. The “counterexample” with the logarithm is explained by the impossibility of integrating \(1/x\) across an interval containing zero.
3.3 Applications
The Existence-Uniqueness theorems are among the most useful and interesting facts we’ll encounter. They suggest the following general principle:
An initial value problem is a definition (in nice conditions).
This is to say that “solving” a differential equation by working out an “exact” formula in terms of familiar functions is not necessarily a priority for us. Later in the course, we will develop tools for approximating values of functions directly from their differential equations, which is usually all one wants from a “solution” (the values of standard functions like \(\sqrt{x}\) or \(\sin x\) are usually decimal approximations anyway!).
At this point, we can apply the principle above to give “proofs” of interesting functional relationships. Filling in the details from real analysis would take us beyond the background we assume for this book, but it can be a useful exercise to work this out if you know that material.
Consider two functions \(f\) and \(g\) defined by the following initial value problem:
\[\begin{align*} f' &= g & f(0) &= 0,\\ g' &= -f & g(0) &= 1. \end{align*}\]
You may recognize these as \(\sin\) and \(\cos\). This differential equation arises naturally from considering a harmonic oscillator – it is equivalent to \(f'' = -f\) with \(f(0) = 0, f'(0) = 1\) and setting \(g = f'\). This comes directly from Hooke’s law with simplified constants.
We can prove some trig identities; for example, let’s show that \[f^2 + g^2 = 1.\]
First, we can show the left hand side is constant by verifying that its derivative is zero: \[(f^2 + g^2)' = 2ff' + 2gg' = 2fg - 2gf = 0,\]
and the constant can be found by evaluating the left hand side anywhere – such as \(x=0\) \[f(0)^2 + g(0)^2 = 0 + 1 = 1.\]
This argument shows that the Pythagorean identity \(f^2 + g^2 = 1\) holds on any interval containing \(0\) where both functions are defined (by definition, they are differentiable wherever they are defined!). Showing that such \(f\) and \(g\) exist, and are even defined on all real numbers, is much harder, although the fact that they give us the position and velocity of an idealized spring is pretty good evidence that they should!
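Taking existence on faith, a numerical sketch of our own (Euler's method again) shows that this IVP really does reproduce \(\sin\) and \(\cos\), and that \(f^2 + g^2\) stays near \(1\) along the way.

```python
# Our own numerical illustration: Euler's method on f' = g, g' = -f with
# f(0) = 0, g(0) = 1.  The march reproduces sin and cos, and f^2 + g^2
# drifts only slightly from 1 (the drift shrinks with the step size h).
import math

def euler_pair(t_end, h=1e-4):
    f, g, t = 0.0, 1.0, 0.0
    while t < t_end:
        f, g = f + h * g, g - h * f   # update both components simultaneously
        t += h
    return f, g

f_approx, g_approx = euler_pair(1.0)
print(f_approx, "vs", math.sin(1.0))
print(f_approx ** 2 + g_approx ** 2, "vs 1")
```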
Continuing with the definition of the trig functions as above, we can verify the double-angle formula for \(\sin\). Our goal is to prove \[f(2x) = 2f(x)g(x).\]
We will show that both the LHS and RHS satisfy the same initial value problem, in which case our general principle implies they define the same function (this is deeper than mere MVT!). Let \(k(x)\) and \(h(x)\) be the left and right hand sides, respectively.
Then we see that \[\begin{align*} k'(x) &= 2g(2x)\\ k''(x) &= 4g'(2x)\\ &= -4f(2x)\\ &= -4k(x) \end{align*}\]
and also \[\begin{align*} h'(x) &= 2f(x)g'(x) + 2f'(x)g(x)\\ &= -2f(x)^2 + 2g(x)^2\\ h''(x) &= -4f(x)f'(x) + 4g(x)g'(x)\\ &= -4f(x)g(x) - 4f(x)g(x)\\ &= -8f(x)g(x)\\ &= -4h(x) \end{align*}\]
Both sides satisfy \(F'' = -4F\), and evaluating each function and its first derivative at zero shows they satisfy the same initial conditions (\(k(0) = h(0) = 0\) and \(k'(0) = h'(0) = 2\)), so they coincide.
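As a sanity check of our own, the identity just proved can be spot-checked with the library \(\sin\) and \(\cos\) standing in for \(f\) and \(g\).

```python
# Spot-check (our own illustration) of the double-angle identity
# sin(2x) = 2 sin(x) cos(x) at a few sample points.
import math

for x in (0.1, 0.7, 1.3, 2.9):
    lhs = math.sin(2 * x)
    rhs = 2 * math.sin(x) * math.cos(x)
    print(f"x={x}: sin(2x)={lhs:.12f}, 2 sin(x) cos(x)={rhs:.12f}")
```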